Neural Computation
MIT Press
Preprints posted in the last 30 days, ranked by how well they match Neural Computation's content profile, based on 36 papers previously published here. The average preprint has a 0.03% match score for this journal, so anything above that is already an above-average fit.
Sun, G.; Huang, N.; Yan, H.; Zhou, J.; Li, Q.; Lei, B.; Zhong, Y.; Wang, L.
Generalization is a fundamental criterion for evaluating learning effectiveness, a domain where biological intelligence excels yet artificial intelligence continues to face challenges. In biological learning and memory, the well-documented spacing effect shows that appropriately spaced intervals between learning trials can significantly improve behavioral performance. While multiple theories have been proposed to explain its underlying mechanisms, one compelling hypothesis is that spaced training promotes integration of input and innate variations, thereby enhancing generalization to novel but related scenarios. Here we examine this hypothesis by introducing a bio-inspired spacing effect into artificial neural networks, integrating input and innate variations across spaced intervals at the neuronal, synaptic, and network levels. These spaced ensemble strategies yield significant performance gains across various benchmark datasets and network architectures. Biological experiments on Drosophila further validate the complementary effect of appropriate variations and spaced intervals in improving generalization, which together reveal a convergent computational principle shared by biological learning and machine learning.
Lorenzi, R. M.; De Grazia, M.; Gandini Wheeler-Kingshott, C. A. M.; Palesi, F.; D'Angelo, E. U.; Casellato, C.
A mean field model (MFM) is a mesoscopic description of neuronal population dynamics that can reduce the complexity of neural microcircuits into equations preserving key functional properties. The generation of an MFM is a complex mathematical process that starts with the incorporation of single neuron input/output relationships and local connectivity. Once neuron electroresponsiveness and synaptic properties are defined, in principle, the process can be automated. Here we develop a tool for automatic MFM derivation from biophysically grounded spiking networks (Auto-MFM) by performing micro-to-mesoscale parameter remapping, estimating input/output relationships specific to different neuronal populations (i.e., transfer functions), and optimizing transfer function parameters. Auto-MFM was tested using a spiking cerebellar circuit as a generative model. The cerebellar MFM derived with Auto-MFM accurately reproduced the population dynamics of the corresponding spiking network, matching mean and time-varying firing rates across a wide range of stimulation patterns. Auto-MFM allowed us to model and explore physiological and pathological circuit variants; indeed, it was used to map ataxia-related structural connectivity alterations of the cerebellar network, in which Purkinje cells with simplified dendritic structure altered network connectivity. Furthermore, Auto-MFM was used to create a library of cerebellar MFMs by sweeping the level of the excitatory conductance at the mossy fiber to granule cell synapse, which is altered in several neuropathologies. Auto-MFM thus proves to be a flexible and powerful tool for generating region-specific MFMs of healthy and pathological brain networks to be embedded in brain digital models.
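The central step of the Auto-MFM pipeline, estimating a population transfer function and optimizing its parameters, can be illustrated with a minimal sketch. The sigmoidal form, the grid-search fit, and all parameter values below are illustrative assumptions, not the authors' actual fitting procedure:

```python
import math
import random

def sigmoid_tf(mu, gain, thresh, fmax):
    """Illustrative population transfer function: output rate vs. mean input."""
    return fmax / (1.0 + math.exp(-gain * (mu - thresh)))

def fit_transfer_function(samples, gains, threshs, fmax):
    """Grid-search least-squares fit of (gain, thresh); a stand-in for a
    proper optimizer in the real pipeline."""
    best, best_err = None, float("inf")
    for g in gains:
        for th in threshs:
            err = sum((sigmoid_tf(mu, g, th, fmax) - rate) ** 2
                      for mu, rate in samples)
            if err < best_err:
                best, best_err = (g, th), err
    return best

random.seed(0)
TRUE_GAIN, TRUE_THRESH, FMAX = 0.8, 5.0, 60.0
# Noisy "measurements" of firing rate vs. mean input, mimicking rates
# estimated from simulations of a spiking network.
samples = [(mu, sigmoid_tf(mu, TRUE_GAIN, TRUE_THRESH, FMAX)
            + random.gauss(0.0, 1.0))
           for mu in [i * 0.5 for i in range(0, 25)]]

gains = [i * 0.05 for i in range(1, 40)]      # candidate gains
threshs = [i * 0.25 for i in range(0, 40)]    # candidate thresholds
gain, thresh = fit_transfer_function(samples, gains, threshs, FMAX)
```

In the real workflow the samples would come from spiking-network simulations of each population, and a dedicated optimizer would replace the grid search.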
Tomko, M.; Lupascu, C. A.; Filipova, A.; Jedlicka, P.; Lacinova, L.; Migliore, M.
Background: Flexibility and robustness of neuronal function are closely linked to degeneracy, the ability of distinct structural or parametric configurations to produce similar functional outcomes. At the cellular level, this often manifests as ion-channel degeneracy, in which multiple combinations of intrinsic conductances yield comparable electrophysiological phenotypes. Methodology: We used a population-based, data-driven modelling framework to generate large ensembles of biophysically detailed CA1 pyramidal neuron models constrained by somatic electrophysiological features extracted from patch-clamp recordings in acute slices from early-birth rats. Ten reconstructed morphologies were incorporated, and model populations were analyzed using parameter correlation analysis, principal component analysis, and generalization tests to assess robustness, degeneracy, and morphology dependence of intrinsic properties. Conclusions: Across the model population, similar somatic firing behaviours emerged from widely different combinations of intrinsic parameters, demonstrating robust two-level ion channel degeneracy both within and across morphologies. Each morphology occupied a distinct region of parameter space, indicating morphology-specific compensatory effects, while weak pairwise parameter correlations suggested distributed compensation rather than tight parameter dependencies. Even with a fixed morphology, multiple parameter subspaces supported comparable electrophysiological phenotypes. Generalization across morphologies was structure-dependent and non-reciprocal, with successful parameter transfer occurring preferentially between structurally similar neurons. Interestingly, to accurately simulate spike-frequency adaptation, it was important to retain some kinetic properties of the ion channel models as free parameters during optimization.
Together, these findings show that dendrite morphology shapes the valid parameter space, and that the similar electrophysiology of CA1 pyramidal neurons arises from the interplay between structural variability and ion-channel diversity. This work highlights the importance of population-based modelling for capturing biological variability, provides insights into how neuronal robustness might be maintained despite substantial heterogeneity, and offers a scalable pipeline for generating biophysically realistic CA1 neuron populations for use in network simulations. Author summary: Neurons must reliably process information even though their internal components, such as ion channels and cellular shape, can vary widely from cell to cell. How stable behaviour emerges from such variability is a fundamental question in neuroscience. In this study, we explored this problem using detailed computer models of early-birth rat hippocampal CA1 pyramidal neurons, a cell type that plays a central role in learning and memory. Instead of building a single "average" neuron model, we created large populations of models that all reproduced key experimental recordings but differed in their internal parameters. We found that neurons with different shapes and different combinations of ion channels could nevertheless generate similar electrical activity. This phenomenon, known as ion channel degeneracy, allows neurons to remain functional despite biological variability or perturbations. Our results show that neuronal shape strongly influences which parameter combinations are viable, but that multiple solutions exist even for the same morphology. The population of models we provide offers a resource for future studies of early-birth CA1 pyramidal cell function and dysfunction.
Santhosh, A.; Narayanan, R.
Artificial recurrent networks are powerful models for studying neural dynamics and representations underlying complex cognitive tasks. However, the impact of neural-circuit heterogeneities on learning, dynamics, robustness, and generalization in these networks remains poorly understood. Here, we systematically investigated the impact of graded intrinsic heterogeneities in artificial recurrent networks trained on different cognitive tasks using reward-modulated Hebbian learning. Across networks trained with distinct hyperparameters and different levels of intrinsic heterogeneity, we observed pronounced network-to-network and task-to-task variability in training convergence, error dynamics during training, and task performance. These effects were strongly task dependent, with memory-dependent tasks exhibiting greater sensitivity to heterogeneity than memoryless tasks. We assessed these networks for robustness to multiple forms of graded post-training perturbations. Perturbations to intrinsic time constant distributions altered network dynamics, but had limited impact on final task accuracy in most cases. In contrast, perturbations to initial conditions, exploratory activity impulses, or task epoch durations strongly affected memory-dependent tasks. Among all perturbations, synaptic jitter was consistently the most detrimental, impairing performance across all tasks and heterogeneity levels. Importantly, despite such a pronounced impact of heterogeneities, none of the metrics (spanning training, performance, dynamics, and robustness) varied monotonically with the level of training heterogeneity, instead showing additional dependencies on task demands, network configuration, and perturbation type. Finally, networks trained on a single task were able to perform structurally related untrained tasks, but failed on fundamentally distinct tasks.
Strikingly, similar task performances emerged from divergent activity trajectories across networks and training conditions, together revealing pronounced functional degeneracy in network dynamics. Collectively, our findings establish that heterogeneous recurrent networks operate in a complex systems regime, where robust function emerges from non-unique, task-specific interactions among hyperparameters, dynamics, and heterogeneities. Our analyses emphasize the need for population-of-networks approaches that focus on interactions among multiple forms of neural heterogeneities in shaping learning and computation.
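The reward-modulated Hebbian rule at the heart of this training scheme can be sketched in its simplest node-perturbation form: perturb the output, then reinforce the correlation between input and perturbation in proportion to the reward-prediction error. The single linear unit, the target mapping, and all constants below are illustrative stand-ins for the recurrent networks in the study:

```python
import random

random.seed(1)
W_TRUE = [1.0, -0.5]          # hypothetical target mapping to be learned
w = [0.0, 0.0]                # learned weights
eta, noise_sd = 0.05, 0.5     # learning rate and exploration noise
r_bar = 0.0                   # running reward baseline

def mse(weights, n=200):
    """Average squared error of the deterministic (noise-free) unit."""
    rng = random.Random(42)
    tot = 0.0
    for _ in range(n):
        x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
        t = W_TRUE[0] * x[0] + W_TRUE[1] * x[1]
        y = weights[0] * x[0] + weights[1] * x[1]
        tot += (y - t) ** 2
    return tot / n

initial_error = mse(w)
for _ in range(3000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    t = W_TRUE[0] * x[0] + W_TRUE[1] * x[1]
    xi = random.gauss(0.0, noise_sd)        # exploratory perturbation
    y = w[0] * x[0] + w[1] * x[1] + xi      # perturbed output
    r = -(y - t) ** 2                       # reward = negative squared error
    # Hebbian term (perturbation x input) gated by reward-prediction error:
    for i in range(2):
        w[i] += eta * (r - r_bar) * xi * x[i]
    r_bar += 0.05 * (r - r_bar)             # slow baseline update
final_error = mse(w)
```

Because the update needs only a scalar reward and locally available signals, it is more biologically plausible than backpropagation, which is one reason it is a common choice for training recurrent models of neural computation.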
Hauge, E.; Saetra, M. J.; Einevoll, G.; Halnes, G.
Neuronal activity alters extracellular ion concentrations and electric potentials. Ephaptic effects refer to the feedback influence that these extracellular changes can have on neuronal activity. While electric ephaptic effects occur on a fast timescale due to extracellular potential perturbations, ionic ephaptic effects are driven by slower, accumulative changes in ion concentrations. Among the previous computational studies of ephaptic effects, the vast majority have focused exclusively on electric effects, while ionic ephaptic effects have largely been neglected. In this work, we present an electrodiffusive computational framework consisting of two-compartment neurons that interact via a shared extracellular space. By accounting for both electric potentials and ion-concentration dynamics in a self-consistent manner, our framework enables us to explore the relative roles of electric and ionic ephaptic effects. Through numerical experiments, we demonstrate that ionic and electric ephaptic interactions play very different roles. While ionic ephaptic interactions increase population firing rates, electric ephaptic interactions primarily drive subtle shifts in spike timing. Furthermore, we show that these spike shifts cause the phase difference (the distance in spike times between a small collection of neurons) to converge to a stable, unique phase difference, which we coin the ephaptic intrinsic phase preference. Author summary: Neurons predominantly communicate through synapses: specialized contact points where a brief electrical signal, known as a spike or action potential, in one neuron influences another. Neurons generate these spikes by exchanging ions with the surrounding extracellular space. This way, spiking neurons alter extracellular ion concentrations and electric potentials. Since neurons are sensitive to such changes in their environment, they can also influence one another indirectly through the shared extracellular medium.
This form of non-synaptic interaction is known as ephaptic coupling. Most computational models of neuronal activity neglect ephaptic interactions, and those that include them typically consider only electric effects while ignoring ionic contributions. As a result, the relative roles of electric and ionic ephaptic effects remain poorly understood. Here, we introduce a computational framework that accounts for both mechanisms in a self-consistent way. Our results show a functional distinction: ionic ephaptic effects act slowly, regulating population firing rates, whereas electric ephaptic effects act on millisecond timescales and subtly shift spike timing. These shifts cause spike-time differences between neurons to converge to a stable value, a phenomenon we call ephaptic intrinsic phase preference.
Tar, L.; Saray, S.; Mohacsi, M.; Freund, T. F.; Kali, S.
Anatomically and biophysically detailed models of neurons have been widely used to study information processing in these cells. Most studies have focused on understanding specific phenomena, while more general models that aim to capture various cellular processes simultaneously remain rare, even though such models are required to predict neuronal behavior under more complex, natural conditions. In this study, we aimed to develop a detailed, data-driven, general-purpose biophysical model of hippocampal CA1 pyramidal neurons. We leveraged extensive morphological, biophysical and physiological data available for this cell type, and established a systematic workflow for model construction and validation that relies on our recently developed software tools. The model is based on a high-quality morphological reconstruction and includes a diverse curated set of ion channel models. After incorporating the available constraints on the distribution of ion channels, the remaining free parameters were optimized using the Neuroptimus tool to fit a variety of electrophysiological features extracted from somatic whole-cell recordings. Validation using HippoUnit confirmed the model's ability to replicate key electrophysiological features, including somatic voltage responses to current input, the attenuation of synaptic potentials and backpropagating action potentials, and nonlinear synaptic integration in oblique dendrites. Our model also included active dendritic spines, modeled either explicitly or by merging their biophysical mechanisms into those of the parent dendrite. We found that many aspects of neuronal behavior were unaffected by the level of detail in modeling spines, but modeling nonlinear synaptic integration accurately required the explicit modeling of spines.
Our data-driven model of CA1 pyramidal cells matching diverse experimental constraints is a general tool for the investigation of the activity and plasticity of these cells and can also be a reliable component of detailed models of the hippocampal network. Our systematic approach to building and validating general-purpose models should apply to other cell types as well. Author Summary: The brain processes information through the activity of billions of individual neurons. To understand how these cells work, scientists build detailed computer models that reproduce their electrical behavior. These models make it possible to explore situations that are difficult or impossible to test experimentally. However, many existing neuron models were designed to explain only a few specific phenomena, which limits their usefulness in more complex settings. In this study, we developed a comprehensive computer model of a hippocampal CA1 pyramidal neuron, a cell type that plays a central role in learning and memory. We built the model using extensive experimental data and applied automated methods to ensure that it reproduces a broad range of observed neuronal behaviors. We also examined how small structures called dendritic spines (tiny protrusions where most synaptic communication occurs) affect how neurons combine incoming signals. We found that even simplified models without individual spines can capture many aspects of neuronal activity, but understanding more complex forms of signal integration requires modeling spines explicitly. Our work also supports the development of more realistic simulations of brain circuits.
Sar, G. K.; Patton, A.; Towlson, E.; Davidsen, J.
A central question in neuroscience is how neural processing generates or encodes behavior. Caenorhabditis elegans is well suited to addressing this question, given its compact nervous system and near-complete structural connectome. Despite this, findings from previous studies remain inconclusive. While some have shown that the connectome can robustly encode specific behaviors such as locomotion, others report that functional connectivity can be reconfigured across behaviors. We aim to understand the relationship between structural connectivity, functional connectivity and biological behavior in silico by using an experimentally motivated computational model leveraging the structural connectome. Stimulation of specific neurons in the model induces oscillatory neural responses, enabling us to infer neuronal functional connectivity. Functional connectivity is found to be stronger among some neurons, allowing us to identify functional communities. We find that electrical synapses play a critical role in determining functional communities, and the resulting mesoscale functional architecture is predominantly gap junctionally assortative. Furthermore, comparison with behavioral circuits shows that locomotion circuits are largely segregated into distinct functional communities while other circuits are more distributed across multiple functional communities. We also observe that stimulation of neurons belonging to these distributed circuits elicits a more synchronized neuronal response compared to stimulation of neurons within the more segregated circuits. This is consistent with the presence of behavioral patterns that originate in one circuit and terminate in another (e.g., chemosensation leading to locomotion), such that stimulation of one circuit can activate the other and eventually result in a synchronized response. 
We also find a large repertoire of chimera-like synchronization patterns upon stimulation of certain behavioral circuits (chemosensation, mechanosensation), indicating high dynamical flexibility. Overall, our results demonstrate that while certain behaviors are governed by functionally segregated circuits, others emerge from the synchronization of multiple functional communities, which are, to begin with, influenced by the underlying structural connectivity. Author summary: Animals constantly transform sensory inputs into actions, but it is still unclear how this mapping from neural activity to behavior is implemented in a real nervous system. Caenorhabditis elegans offers a unique testbed for this question because its entire wiring diagram is nearly completely mapped. Yet, previous studies have reached mixed conclusions about how well this anatomical circuit diagram predicts actual patterns of activity and behavior. Here, we use a biologically inspired computational model of the C. elegans nervous system to bridge this gap between structure, function, and behavior. By virtually stimulating individual neurons and observing the resulting network-wide oscillations, we infer how strongly different pairs and groups of neurons interact in functional terms. We then use network analysis tools to identify groups of neurons that tend to co-activate, and relate these functional communities to known behavioral circuits for locomotion and sensory processing. We find that gap junctions play a key role in shaping functional communities, and that locomotion-related neurons are more functionally segregated than neurons involved in other behaviors, which are more functionally distributed. Our results suggest that some behaviors rely on specialized, functionally isolated circuits, whereas others emerge from the coordinated activity of multiple functional communities.
Schmitt, F. J.; Müller, F. L.; Nawrot, M. P.
Neural population activity typically evolves on low-dimensional manifolds and can be described as trajectories in attractor-like state spaces, including metastable switching among quasi-stable assembly states. Here we develop a unified definition of clustered neural networks with local excitatory-inhibitory balance in which enhanced within-cluster effective coupling can be realized by connection probability (structural clustering), synaptic efficacy (weight clustering), or any mixture of both. We introduce a single mixing parameter κ ∈ [0, 1] that redistributes a defined clustering contrast between connection probabilities and synaptic efficacies while preserving the mean input of a balanced random network. Using mean-field theory and network simulations, we show that metastable dynamics are supported across the full κ continuum. Shifting contrast between structural and weight clustering changes higher-order input structure, reshaping multistable regimes, neuronal correlations, and the balance between single- and multi-cluster episodes. Because real nervous systems jointly organize topology and synaptic strength, our approach provides a biologically realistic assembly definition and a basis for future models combining structural and functional plasticity. In practical terms, κ offers a translation axis for neuromorphic and other constrained substrates, clarifying trade-offs between routing resources and synaptic weight resolution when implementing attractor-based computational primitives such as winner-take-all decisions and working-memory states for artificial agents.
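One way to make the mixing parameter concrete: fix the total within/between effective-coupling contrast R = (p_in·w_in)/(p_out·w_out), let κ allocate it between probabilities and weights via p_in/p_out = R^(1−κ) and w_in/w_out = R^κ, and rescale so the population-mean input is unchanged. This parameterization is a plausible reading of the abstract, not necessarily the paper's exact equations:

```python
def clustered_params(p, w, R, kappa, f):
    """Split a clustering contrast R between structure and weights.

    p, w   : baseline connection probability and synaptic efficacy
    R      : total effective-coupling contrast (p_in*w_in)/(p_out*w_out)
    kappa  : 0 = purely structural clustering, 1 = purely weight clustering
    f      : fraction of a neuron's potential partners in its own cluster
    """
    rp, rw = R ** (1.0 - kappa), R ** kappa   # contrast in p vs. in w
    # Keep the mean connection probability fixed: f*p_in + (1-f)*p_out = p
    p_out = p / (f * rp + 1.0 - f)
    p_in = rp * p_out
    # Keep the mean input fixed: f*p_in*w_in + (1-f)*p_out*w_out = p*w
    pw_out = p * w / (f * R + 1.0 - f)
    w_out = pw_out / p_out
    w_in = rw * w_out
    return p_in, p_out, w_in, w_out

def mean_input(p_in, p_out, w_in, w_out, f):
    """Population-mean input per presynaptic partner."""
    return f * p_in * w_in + (1.0 - f) * p_out * w_out
```

Setting κ = 0 recovers purely structural clustering and κ = 1 purely weight clustering; the mean input is invariant for every κ in between.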
Vloeberghs, R.; Tuerlinckx, F.; Urai, A. E.; Desender, K.
A widely used framework for studying the computational mechanisms of decision making is the Drift Diffusion Model (DDM). To account for the presence of both fast and slow errors in empirical data, the DDM incorporates across-trial variability in parameters such as the drift rate and the starting point. Although these variability parameters enable the model to reproduce both fast and slow errors, they rely on the assumption that each parameter is independently sampled over trials. As a result, the DDM effectively predicts that errors, whether fast or slow, occur randomly over time. However, in empirical data this assumption is violated, as error responses are often temporally clustered. To address this limitation, we introduce the autocorrelated DDM, in which trial-to-trial fluctuations in drift rate, starting point, and boundary evolve according to first-order autoregressive (AR(1)) processes. Using simulations, we demonstrate that, unlike the across-trial variability DDM, the autocorrelated DDM naturally accounts for temporal clustering of errors. We further show that model parameters can be reliably recovered using Amortized Bayesian Inference, even with as few as 500 trials. Finally, fits to empirical data indicate that the autocorrelated DDM provides the best account of error clustering, highlighting that computational parameters fluctuate over time, despite typically being estimated as fixed across trials.
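The qualitative claim, that AR(1) drift fluctuations produce temporally clustered errors while independently sampled drift does not, can be checked without simulating full diffusion paths by using the closed-form error probability of a diffusion with drift v, noise σ, and symmetric bounds ±a: P(error) = 1/(1 + exp(2va/σ²)). All parameter values below are illustrative:

```python
import math
import random

def error_sequence(drifts, a=1.0, sigma=1.0, rng=None):
    """Per-trial error indicators from the closed-form DDM error probability
    P(error) = 1 / (1 + exp(2*v*a/sigma^2)) for symmetric bounds +/-a."""
    rng = rng or random.Random(0)
    return [1 if rng.random() < 1.0 / (1.0 + math.exp(2.0 * v * a / sigma ** 2))
            else 0 for v in drifts]

def lag1_autocorr(xs):
    """Lag-1 autocorrelation of a sequence."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    cov = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1)) / (n - 1)
    return cov / var

rng = random.Random(7)
n_trials, rho, v_mean, v_sd = 5000, 0.95, 1.0, 1.0
# AR(1) drift: v_t = v_mean + rho*(v_{t-1} - v_mean) + innovation
innov_sd = v_sd * math.sqrt(1.0 - rho ** 2)   # keeps stationary sd = v_sd
v, ar_drifts = v_mean, []
for _ in range(n_trials):
    v = v_mean + rho * (v - v_mean) + rng.gauss(0.0, innov_sd)
    ar_drifts.append(v)
# Standard across-trial variability: drift sampled independently each trial
iid_drifts = [rng.gauss(v_mean, v_sd) for _ in range(n_trials)]

ar_errors = error_sequence(ar_drifts, rng=random.Random(1))
iid_errors = error_sequence(iid_drifts, rng=random.Random(2))
```

With autocorrelated drift, runs of low-drift trials produce bursts of errors, so the error indicator itself becomes positively autocorrelated; with independently sampled drift it does not.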
McAllister, J.; Houghton, C. J.; Wade, J.; O'Donnell, C.
The connectivity of brain networks is extremely sparse due to metabolic, physical and spatial constraints. Although wiring sparsity can confer computational advantages for biological and artificial neural networks, sparse networks require fine parameter tuning and exhibit strong sensitivity to perturbations. How brains achieve their efficiency and robustness is unclear. Here we addressed this by analysing the dynamical properties of Echo State Networks with wiring based on the Drosophila melanogaster fruit fly connectome, compared with sparsity-matched random-wiring networks. We evaluated these networks on a set of eight cognitive tasks, and found that connectome-based neural networks (CoNNs) typically showed narrowly distributed task engagement across their neurons. The importance of a neuron for task performance correlated with its node degree, local clustering, and self-recurrency, and these correlations were stronger in CoNNs than in random networks. CoNNs were more robust to neuronal loss, retaining their task performance and beneficial dynamical properties such as criticality and spectral radius better than random networks. Similarly, CoNNs were more robust to hyperparameter variations in both input and recurrent weight scaling. Using theoretical arguments and numerical simulations, we show that excess CoNN node self-recurrency is sufficient to explain this enhanced robustness. Overall, these results identify non-random features of connectome wiring that allow brains to reconcile extreme sparsity with reliable computation. Significance: Brain networks support robust computation even though they operate under extreme wiring sparsity due to metabolic and spatial constraints. While sparse networks typically require fine-tuning and are sensitive to perturbations, we show that biological connectomes support specialised, efficient task engagement and remain robust to neuron loss and parameter variation.
We identify excess neuronal self-recurrency as a key structural feature underlying this stability. These results reveal how non-random connectivity stabilises computation in extremely sparse networks, providing principles for understanding brain function and designing robust, efficient artificial neural systems.
Gambrell, O.; Singh, A.
A key component of interneuronal communication is the modulation of postsynaptic firing frequencies by stochastic transmitter release from presynaptic neurons. The time interval between successive postsynaptic firings is called the inter-spike interval (ISI), and understanding its statistics is integral to neural information processing. We start with a model of an excitatory chemical synapse with postsynaptic neuron firing governed by a classical integrate-and-fire model. Using a first-passage time framework, we derive exact analytical results for the ISI statistical moments, revealing parameter regimes driving precision in postsynaptic action potential timing. Next, we extend this analysis to include both an excitatory and an inhibitory presynaptic connection onto the same postsynaptic neuron. We consider both a fixed postsynaptic-firing threshold and a threshold that adapts based on the postsynaptic membrane potential history. Our analysis shows that the latter adaptive threshold can result in scenarios where increasing the inhibitory input frequency increases the postsynaptic firing frequency. Moreover, we characterize parameter regimes where ISI noise is hypo-exponential or hyper-exponential based on its coefficient of variation being less than or greater than one, respectively.
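The hypo-exponential regime has a simple special case worth sketching: with Poisson excitatory input, fixed EPSP jumps, no leak, and a threshold reached after K jumps, the first-passage time is Erlang-distributed, so the ISI has mean K/λ and coefficient of variation 1/√K < 1. A simulation under these simplifying assumptions (which drop the leak and inhibition treated in the paper):

```python
import math
import random

def simulate_isis(rate, k_jumps, n_isis, rng):
    """ISIs of an integrate-and-fire unit driven by Poisson input at the
    given rate, firing after k_jumps fixed-size EPSPs (no leak)."""
    isis = []
    for _ in range(n_isis):
        # Each ISI is a sum of k_jumps i.i.d. exponential inter-arrival times
        isis.append(sum(rng.expovariate(rate) for _ in range(k_jumps)))
    return isis

def mean_cv(xs):
    """Sample mean and coefficient of variation."""
    n = len(xs)
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return m, sd / m

rng = random.Random(3)
rate, k = 100.0, 16           # input rate (Hz) and jumps needed to fire
isis = simulate_isis(rate, k, 20000, rng)
m, cv = mean_cv(isis)
# First-passage (Erlang) predictions: mean = k/rate, CV = 1/sqrt(k)
```

Raising the threshold (larger K) regularizes the output (CV → 0), while inhibition or an adaptive threshold can push the CV above one, the hyper-exponential regime described in the abstract.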
Gupta, R.; Karmeshu; Singh, R. K. B.
Voltage perturbations to a repetitively firing Hodgkin-Huxley (HH) model of neuronal spiking in the bistable regime, with a coexisting limit cycle and stable steady node, can either lead to phase resetting of the spikes or to collapse to the stable steady state. The latter describes a non-firing hyperpolarized quiescent state of the neuron despite the presence of constant external current. Using the asymptotic phase response curve (PRC), the impact of voltage perturbations on a repetitively firing HH model is studied here while it is diffusively coupled to another HH model under identical external stimulation. It is observed that the pre-perturbation state of synchronization and the coupling strength critically determine the PRC response of the perturbed HH dynamics. Higher coupling strengths of perfectly in-phase (anti-phase) synchronized HH models shrink (expand) the combinatorial space of perturbation strengths and oscillation phases causing collapse to the quiescent state. This indicates a reduced (enlarged) basin of attraction, viz. the null space, associated with the steady state in the HH phase space. The findings bear important implications for the spiking dynamics of diverse interneurons, as well as special cases of pyramidal neurons, coupled through electrical synapses via gap junctions, and suggest a role for gap junction plasticity in tuning vulnerability to the quiescent state in the presence of biological noise and spikelets.
Acharya, G.; Huang, A.; Santhakumar, V.; Nozari, E.
For decades, electrical neuromodulation has been used as a therapeutic mechanism to disrupt and desynchronize pathological neural activity in various neurological disorders. Despite notable progress, however, patient outcomes remain highly variable, particularly in medically intractable epilepsy, where surgery still provides the greatest chance of seizure freedom. Here we propose passive neuromodulation (PNM) as a radical alternative to conventional neurostimulation, whereby analogue feedback is used to drain energy from an epileptic circuit and thus suppress the initiation or spread of electrographic seizures. We provide pilot evidence of the efficacy and robustness of PNM using two computational models of epileptic dynamics: a detailed biophysical network model of the dentate gyrus, and the Epileptor neural mass model of seizure dynamics. Despite the vast differences between these models, our results show the robust ability of PNM to suppress seizures in both. We further demonstrate the efficacy and robustness of responsive PNM, whereby brief (50 ms) windows of PNM are triggered by a simultaneously running seizure detection algorithm, as well as the safe and tunable nature of PNM, where more robust seizure suppression can be achieved by parametrically titrating the amount of power drained from the tissue, without inducing any seizures even if applied interictally. Overall, our results provide strong evidence of the promise of PNM for the closed-loop control of epileptic seizures and other neurological disorders where damping pathological network activity can restore healthy dynamics.
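The qualitative principle, that a purely dissipative feedback can quench pathological oscillations without injecting energy, can be sketched on a generic oscillator. The FitzHugh-Nagumo unit below is a hypothetical stand-in for a seizing circuit, not one of the two models used in the study, and the feedback gain is illustrative:

```python
def fhn_trace(k_feedback, steps=60000, dt=0.005):
    """Euler integration of a FitzHugh-Nagumo unit with passive feedback
    u = -k*(v - v0) added to the voltage equation."""
    v, w = -1.0, -0.5
    eps, a, b, drive = 0.08, 0.7, 0.8, 0.5   # classic oscillatory regime
    v0 = -1.0                      # rest level the feedback pulls toward
    trace = []
    for _ in range(steps):
        u = -k_feedback * (v - v0)           # purely dissipative feedback
        dv = v - v ** 3 / 3.0 - w + drive + u
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return trace

def amplitude(trace):
    """Peak-to-peak voltage range over the second half (post-transient)."""
    tail = trace[len(trace) // 2:]
    return max(tail) - min(tail)

amp_free = amplitude(fhn_trace(0.0))     # no feedback: limit-cycle spiking
amp_damped = amplitude(fhn_trace(2.0))   # passive feedback engaged
```

Because u always opposes deviations of v from the rest level, it can only remove energy from the circuit, which is the sense in which such a modulation is passive rather than stimulating.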
Diekmann, N.; Lissek, S.; Uengoer, M.; Cheng, S.
The progress of learning is usually quantified by averaging responses across participants and/or multiple trials within a block. However, such approaches obscure the trial-by-trial progress of learning, which has recently been shown to express a rich variety of dynamics. An alternative approach that does not suffer from this problem is the detection and analysis of points of behavioral change, i.e., change-point analysis. Using change-point analysis, we reanalyzed data from human participants in different predictive learning tasks in which learned contingencies underwent reversal. We find that responses of individual participants were more accurately characterized by behavioral change points than by the average learning curve. Importantly, change points shifted significantly to later trials during reversal learning, indicating that reversal learning is more difficult than the initial learning. In a computational model based on deep reinforcement learning, we show that the change-point shift required the replay of previous experiences, which in turn depends on the hippocampus. This finding is consistent with studies showing that lesions of the hippocampus yield faster reversal learning. In summary, we reaffirm the importance of analyzing single-participant responses, show that phenomenological learning rates are slower during reversal learning, and provide a theoretical account for this difference.
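A minimal version of the change-point analysis: scan every split of a binary correct/incorrect sequence and keep the one that maximizes the two-segment Bernoulli log-likelihood. The simulated learner below (near-chance performance before trial 120, accurate afterwards) and all parameter values are illustrative:

```python
import math
import random

def bernoulli_ll(xs):
    """Log-likelihood of a binary sequence under its own MLE rate."""
    n, k = len(xs), sum(xs)
    if k == 0 or k == n:
        return 0.0                # all-same segment: log-likelihood is 0
    p = k / n
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def change_point(xs):
    """Split index maximizing the two-segment Bernoulli log-likelihood."""
    best_t, best_ll = 1, -float("inf")
    for t in range(1, len(xs)):
        ll = bernoulli_ll(xs[:t]) + bernoulli_ll(xs[t:])
        if ll > best_ll:
            best_t, best_ll = t, ll
    return best_t

# Simulated participant: P(correct) jumps from 0.2 to 0.9 at trial 120
rng = random.Random(5)
TRUE_CP = 120
responses = [1 if rng.random() < (0.2 if i < TRUE_CP else 0.9) else 0
             for i in range(240)]
cp = change_point(responses)
```

Applied per participant, the detected split replaces the averaged learning curve with a single interpretable quantity, and a shift of `cp` to later trials after a contingency reversal is exactly the signature reported in the abstract.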
Zhang, S.; Wang, H.; Mendoza, R. B.
Resource sharing is a fundamental form of social exchange underlying the formation and maintenance of social bonds in humans and other species. While reciprocity has long been proposed as a key mechanism in group interactions, the dynamic processes underlying resource allocation remain poorly understood. In this study, we employed computational modeling to investigate the temporal dynamics of resource sharing in a novel group decision-making task across three experiments. We found that, beyond the well-documented reciprocity, participants exhibited consistent alternating behavior, characterized by switching between potential recipients. This alternation was not driven by fairness concerns but reflected a strategic balance between maintaining stable partnerships and exploring alternatives. Crucially, a reinforcement learning model incorporating Theory of Mind (ToM) consistently outperformed all alternative models. These findings highlight the critical role of ToM in social decision-making and suggest that mentalizing others' intentions may be essential for effective resource sharing and social bond formation.
Daou, M.; Jovanic, T.; Destexhe, A.
Show abstract
Building a simple model that precisely and functionally characterizes a neuron is a challenging and important task for selecting the best concise and computationally efficient model. However, this type of work has so far been done only for subthreshold properties of neurons. Here, we take a different perspective and suggest a method to obtain point-neuron models from morphologically detailed models with dendrites. To do this, we focus on the functional characterization of the neuron response under in vivo conditions and compute the transfer function of the detailed model. The parameters of this transfer function, in terms of mean voltage, voltage standard deviation, and correlation time, can be used to compute the "best" point-neuron model that generates a transfer function very close to that of the morphologically detailed model. We illustrate this approach for two very different neuronal morphologies, one from Drosophila larvae and one from mammals. In conclusion, this approach provides a tool to generate point-neuron models from detailed models, based on a functional characterization of the neuron response. Significance Statement: This study provides a new computational method to reduce morphological models into point-neuron models. To do so, we calculate the transfer function parameters, i.e., the voltage standard deviation, the mean voltage, and the correlation time, of the morphological model and fit a point-neuron model onto these data. Here, we successfully apply this approach to two very different neuron morphologies, a Drosophila neuron and a rat motoneuron.
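The kind of transfer function the abstract describes — output rate as a function of input mean, standard deviation, and correlation time — can be measured on any candidate point-neuron model by driving it with an Ornstein-Uhlenbeck input. The sketch below does this for a plain leaky integrate-and-fire neuron; all parameter values (membrane time constant, threshold, reset) are illustrative placeholders, not those of the paper.

```python
import numpy as np

def lif_rate(mu, sigma, tau_corr, T=5.0, dt=1e-4, tau_m=0.02,
             v_rest=-65e-3, v_th=-50e-3, v_reset=-65e-3, seed=0):
    """Estimate the firing rate (Hz) of a leaky integrate-and-fire neuron
    driven by an Ornstein-Uhlenbeck input with mean `mu`, standard
    deviation `sigma`, and correlation time `tau_corr` (SI units).
    Membrane parameters are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau_corr)           # exact OU decay per step
    b = sigma * np.sqrt(1.0 - a * a)     # matching noise amplitude
    v, I, n_spikes = v_rest, mu, 0
    for _ in range(int(T / dt)):
        I = mu + a * (I - mu) + b * rng.standard_normal()
        v += (dt / tau_m) * (I - v)      # membrane relaxes toward the input
        if v >= v_th:
            n_spikes += 1
            v = v_reset
    return n_spikes / T

# Sweeping the input statistics traces out the transfer function:
print(lif_rate(-52e-3, 4e-3, 5e-3))  # near-threshold mean: fires
print(lif_rate(-60e-3, 4e-3, 5e-3))  # hyperpolarized mean: near-silent
```

Fitting then amounts to adjusting the point-neuron parameters until this measured surface matches the one computed from the detailed model.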
Brunton, B. W.; Abe, E. T. T.; Hu, L. J.; Tuthill, J. C.
Show abstract
Animal intelligence is not purely a product of abstract computation in the brain, but emerges from dynamic interactions between the nervous system and the body. New connectome datasets and musculoskeletal models now enable integrated, closed-loop simulations of the neural and biomechanical systems of the fruit fly Drosophila, an ideal model organism to investigate embodied intelligence. However, many biological parameters of the nervous system and the body, as well as how they interface, remain unknown. To fill such gaps, researchers are turning to deep reinforcement learning (DRL), a data-driven optimization framework, to create virtual animals that imitate the behavior of real animals. Here, we provide a cautionary tale about the interpretation of such models. We constructed a virtual chimera of two phylogenetically distant species: a connectome of the C. elegans nematode worm and a biomechanical model of the fly body. The worm connectome receives sensory information from the fly body, and an artificial neural network is trained with DRL to map worm motor neuron activations to the fly's leg actuators. The resulting digital sphinx produces highly realistic fly walking, yet it is biologically meaningless. This exercise teaches us nothing about either animal and exposes a core peril of connectome-body models: behavioral fidelity is achievable without biological fidelity, making such models easy to overinterpret. Done carefully, virtual animals can be powerful partners to biological experiments, but only if their components and interfaces are grounded in biology.
Sheheitli, H.; Johnson, L. A.; Wang, J.; Aman, J. E.; Vitek, J. L.
Show abstract
Local field potentials recorded from the subthalamic nucleus (STN) in Parkinson's disease (PD) exhibit a distinctive multiscale spectral signature: exaggerated beta-band oscillations (13-30 Hz) coupled to high-frequency oscillations (HFOs, 200-400 Hz), with HFO amplitude being phase-locked to the beta cycle. This phase-amplitude coupling (PAC) has been identified as a promising biomarker of the parkinsonian state, yet no biophysical model has explained how it emerges, what determines the HFO frequency, or how HFOs can exist without beta modulation in the medicated STN. Here we show that a heterogeneous population of excitatory Izhikevich neurons with recurrent coupling produces three dynamical regimes: (i) asynchronous tonic firing, (ii) asynchronous bursting, in which neurons burst individually, producing broadband HFO power but without coherent population-level PAC, and (iii) synchronous bursting, which gives rise to beta-HFO PAC. The regimes are governed by two biophysically interpretable parameters that capture complementary effects of dopamine depletion: one reflecting changes in intrinsic neuronal excitability, the other reflecting changes in synaptic coupling strength. The transition from asynchronous to synchronous bursting in this model captures the emergence of pathological STN neuronal activity in the parkinsonian state. HFO peak frequency varies continuously across the two-parameter landscape, providing a mechanistic account of the clinically observed shift from slow (200-300 Hz) to fast (300-400 Hz) HFOs between medication states. The character of the synchronization transition depends on baseline excitability, ranging from a sharp co-emergence of bursting and synchrony at low excitability to a decoupled two-stage process at intermediate excitability where burst recruitment precedes synchronization.
The model generates testable predictions for future clinical and experimental studies, provides a numerical dissection of how mesoscopic LFP features map onto microscopic neuronal dynamics, and serves as a computational building block for future circuit-level models that can guide brain stimulation strategies tailored to the patient-specific dynamical state of the STN. Author summary: In Parkinson's disease, local field potentials (LFP) from the subthalamic nucleus (STN) contain two prominent rhythms: a slow beta rhythm (13-30 Hz) and fast oscillations (200-400 Hz). In the parkinsonian state, these rhythms become coupled, with fast oscillation amplitude varying systematically with beta phase, a relationship absent in the medicated state. We built a biophysical spiking neuron network model that captures two key effects of dopamine depletion on STN neuronal activity: changes in the intrinsic neuronal excitability and changes in synaptic coupling strength. The model produces fast oscillations from rapid intraburst firing, while the slow beta rhythm and its coupling to fast oscillations emerge with the onset of synchronized bursting across the population. Importantly, the frequency of the fast oscillations shifts continuously depending on both parameters, explaining a puzzling clinical observation that these oscillations change frequency between medication states. The model also reproduces the modulation pattern in the spike-triggered average of HFO envelope amplitude reported in patient recordings, confirming consistency with single-unit observations as well as LFP-level spectral features. By mapping how multi-timescale LFP spectral features relate to the dynamical regime of the underlying neuronal population, this work offers a framework for brain stimulation strategies informed by patient-specific dynamical states.
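The tonic-versus-bursting distinction at the single-cell level can be reproduced with the standard Izhikevich (2003) equations; the parameter sets below are the textbook regular-spiking and chattering examples, not the heterogeneous, recurrently coupled population of the paper.

```python
import numpy as np

def izhikevich_spikes(a, b, c, d, I, T=1000.0, dt=0.25):
    """Simulate a single Izhikevich neuron (forward Euler) with constant
    input current I and return spike times in ms. Parameters follow
    Izhikevich (2003); network coupling is omitted in this sketch."""
    v, u = -65.0, b * -65.0
    spikes, t = [], 0.0
    for _ in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # spike: record and reset
            spikes.append(t)
            v, u = c, u + d
        t += dt
    return np.array(spikes)

def isi_cv(spike_times, skip=2):
    # Coefficient of variation of inter-spike intervals (transient dropped);
    # near 0 for regular tonic firing, large for burst/silence alternation.
    isi = np.diff(spike_times)[skip:]
    return isi.std() / isi.mean()

tonic = izhikevich_spikes(0.02, 0.2, -65.0, 8.0, I=10.0)  # regular spiking
burst = izhikevich_spikes(0.02, 0.2, -50.0, 2.0, I=10.0)  # chattering/bursting
```

In the bursting regime, the short intra-burst intervals set the HFO timescale while the burst repetition sets the slower rhythm, which is the mapping the population model exploits.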
Levy, A. D.; Zeidman, P. D.; Friston, K.
Show abstract
Cognitive processes such as decision-making, working memory, and motor planning operate across a hierarchy of timescales, manifesting as rapid neural transients alongside slower physiological mechanisms like short-term plasticity. Conventional Dynamic Causal Modelling (DCM) limits our ability to study these dynamics by assuming stationary parameters, whilst recent time-varying approaches often rely on segmenting data into epochs. This segmentation artificially resets neural states between windows, fundamentally obscuring the continuous hysteresis essential to sequential processing. To address this limitation, we introduce DCM for Sequential Responses (DCM-SR), a generative framework that embeds parameter evolution directly within the first-level model whilst employing a continuous state-space formulation that removes the requirement for epoching. This approach generalises non-stationarity to all neural mass parameters, including synaptic gains and time constants, modelling them as piecewise smooth trajectories that evolve alongside continuous neural states. Consequently, the model explicitly captures two distinct forms of temporal memory: transient history dependence, where responses are shaped by the carryover effect of recent perturbations, and path dependence, where the system's trajectory through parameter space determines its responsiveness. The framework accommodates both exogenous, stimulus-locked transitions and endogenous, autonomous state changes, permitting inference on both external perturbations and internal drivers of network evolution. Simulations establish the model's face validity, demonstrating robust parameter recovery and conservative model selection that accurately discriminates between genuine parameter evolution and spurious complexity. We applied the framework to empirical data from an auditory go/no-go task, modelling a full sequence of cognitive phases from initial cue processing and anticipation through to motor preparation and execution.
This analysis established construct validity by resolving the biophysical generators of the contingent negative variation, attributing this slow potential to sustained thalamocortical drive and deep-layer hyperpolarisation rather than superficial-layer activity. Furthermore, the model captured trial-specific modulations of the hyperdirect pathway during motor inhibition, tracking the dynamic interplay between prefrontal executive control and basal ganglia gating. DCM-SR offers the first principled approach to decomposing compound signals such as slow cortical potentials into evolving synaptic mechanisms and continuous state trajectories, and provides a necessary bridge for investigating the biophysical implementation of extended cognitive phenomena including evidence accumulation and physiological hysteresis.
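The claim that epoching resets neural states, and thereby obscures hysteresis, can be made concrete with a toy first-order system whose time constant drifts slowly. Everything here (the scalar dynamics, the drift, the reset indices) is an illustrative assumption, not the DCM-SR generative model itself.

```python
import numpy as np

def simulate(tau_traj, u, dt=0.01, resets=()):
    """Euler-integrate x' = (-x + u)/tau(t). If `resets` lists step
    indices, the state is zeroed there, mimicking epoch-based analyses
    that restart the model in every window; otherwise the state (and
    hence any hysteresis) carries over continuously."""
    x, out, resets = 0.0, [], set(resets)
    for i, tau in enumerate(tau_traj):
        if i in resets:
            x = 0.0                      # the "artificial reset"
        x += dt * (-x + u) / tau
        out.append(x)
    return np.array(out)

tau = np.linspace(0.1, 0.4, 600)         # slowly drifting time constant
cont = simulate(tau, u=1.0)
epoched = simulate(tau, u=1.0, resets=[200, 400])
# The two traces agree until the first epoch boundary; just after step
# 200 the epoched trace has discarded the state accumulated so far.
```

A continuous state-space formulation keeps the accumulated state, so parameter estimates after a boundary are not contaminated by an artificial re-equilibration transient.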
Hassanejad Nazir, A.; Hellgren Kotaleski, J.; Liljenström, H.
Show abstract
As social beings, humans make decisions partly based on social interaction. Observing the behavior of others can lead to learning from and about them, potentially increasing trust and prompting trust-based behavioral changes. Observation-based decision making involves different neural structures. The orbitofrontal cortex (OFC) and lateral prefrontal cortex (LPFC) are known as neural structures mainly involved in processing emotional and cognitive decision values, respectively, while the anterior cingulate cortex (ACC) plays a pivotal role as a social hub, integrating the afferent expectancy signals from OFC and LPFC. This paper presents a neurocomputational model of the interplay between observational learning and trust, as well as their role in individual decision-making. Our model elucidates and predicts the emotional and rational behavioral changes of an individual influenced by observing the action-outcome association of an alleged expert. We have modeled the neurodynamics of three cortical structures (OFC, LPFC, and ACC) and their interactions, where the neural oscillatory properties, modeled with Dynamic Bayesian Probability, represent the observer's attitude towards the expert and the decision options. As an example of an everyday behavioral situation related to climate change, we use the choice of transportation between home and work. The EEG-like simulation outputs from our model represent the presumed brain activity of an individual making such a choice, assuming the decision-maker is exposed to social information.